Mozart Exploration Server

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information is therefore not validated.

Predicting the similarity between expressive performances of music from measurements of tempo and dynamics.

Internal identifier: 001900 (Main/Exploration); previous: 001899; next: 001901

Author: Renee Timmers [Austria]

Source:

RBID: pubmed:15704432

Abstract

Measurements of tempo and dynamics from audio files or MIDI data are frequently used to gain insight into a performer's contribution to music. The measured variations in tempo and dynamics are often represented in different formats by different authors. Few systematic comparisons have been made between these representations. Moreover, it is unknown which data representation comes closest to subjective perception. The reported study tests the perceptual validity of existing data representations by comparing their ability to explain the subjective similarity between pairs of performances. In two experiments, 40 participants rated the similarity between performances of a Chopin prelude and a Mozart sonata. Models based on different representations of the tempo and dynamics of the performances were fitted to these similarity ratings. The results favor data representations of performances other than those generally used, and imply that comparisons between performances are made perceptually in a different way than is often assumed. For example, the best fit was obtained with models based on absolute tempo and on absolute tempo times loudness, while conventional models based on normalized variations, or on correlations between tempo profiles and loudness profiles, did not explain the similarity ratings well.

PubMed: 15704432


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Predicting the similarity between expressive performances of music from measurements of tempo and dynamics.</title>
<author>
<name sortKey="Timmers, Renee" sort="Timmers, Renee" uniqKey="Timmers R" first="Renee" last="Timmers">Renee Timmers</name>
<affiliation wicri:level="3">
<nlm:affiliation>Austrian Research Institute for Artificial Intelligence, 1010 Vienna, Austria. renee.timmers@kcl.ac.uk</nlm:affiliation>
<country xml:lang="fr">Autriche</country>
<wicri:regionArea>Austrian Research Institute for Artificial Intelligence, 1010 Vienna</wicri:regionArea>
<placeName>
<region type="land" nuts="2">Vienne (Autriche)</region>
<settlement type="city">Vienne (Autriche)</settlement>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2005">2005</date>
<idno type="RBID">pubmed:15704432</idno>
<idno type="pmid">15704432</idno>
<idno type="wicri:Area/PubMed/Corpus">000179</idno>
<idno type="wicri:Area/PubMed/Curation">000179</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000165</idno>
<idno type="wicri:Area/Ncbi/Merge">000171</idno>
<idno type="wicri:Area/Ncbi/Curation">000171</idno>
<idno type="wicri:Area/Ncbi/Checkpoint">000171</idno>
<idno type="wicri:doubleKey">0001-4966:2005:Timmers R:predicting:the:similarity</idno>
<idno type="wicri:Area/Main/Merge">001925</idno>
<idno type="wicri:Area/Main/Curation">001900</idno>
<idno type="wicri:Area/Main/Exploration">001900</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Predicting the similarity between expressive performances of music from measurements of tempo and dynamics.</title>
<author>
<name sortKey="Timmers, Renee" sort="Timmers, Renee" uniqKey="Timmers R" first="Renee" last="Timmers">Renee Timmers</name>
<affiliation wicri:level="3">
<nlm:affiliation>Austrian Research Institute for Artificial Intelligence, 1010 Vienna, Austria. renee.timmers@kcl.ac.uk</nlm:affiliation>
<country xml:lang="fr">Autriche</country>
<wicri:regionArea>Austrian Research Institute for Artificial Intelligence, 1010 Vienna</wicri:regionArea>
<placeName>
<region type="land" nuts="2">Vienne (Autriche)</region>
<settlement type="city">Vienne (Autriche)</settlement>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">The Journal of the Acoustical Society of America</title>
<idno type="ISSN">0001-4966</idno>
<imprint>
<date when="2005" type="published">2005</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Acoustics (instrumentation)</term>
<term>Adult</term>
<term>Auditory Perception</term>
<term>Female</term>
<term>Forecasting</term>
<term>Humans</term>
<term>Judgment</term>
<term>Loudness Perception</term>
<term>Male</term>
<term>Middle Aged</term>
<term>Models, Theoretical</term>
<term>Music</term>
<term>Periodicity</term>
</keywords>
<keywords scheme="MESH" qualifier="instrumentation" xml:lang="en">
<term>Acoustics</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Auditory Perception</term>
<term>Female</term>
<term>Forecasting</term>
<term>Humans</term>
<term>Judgment</term>
<term>Loudness Perception</term>
<term>Male</term>
<term>Middle Aged</term>
<term>Models, Theoretical</term>
<term>Music</term>
<term>Periodicity</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Measurements of tempo and dynamics from audio files or MIDI data are frequently used to get insight into a performer's contribution to music. The measured variations in tempo and dynamics are often represented in different formats by different authors. Few systematic comparisons have been made between these representations. Moreover, it is unknown what data representation comes closest to subjective perception. The reported study tests the perceptual validity of existing data representations by comparing their ability to explain the subjective similarity between pairs of performances. In two experiments, 40 participants rated the similarity between performances of a Chopin prelude and a Mozart sonata. Models based on different representations of the tempo and dynamics of the performances were fitted to these similarity ratings. The results favor other data representations of performances than generally used, and imply that comparisons between performances are made perceptually in a different way than often assumed. For example, the best fit was obtained with models based on absolute tempo and absolute tempo times loudness, while conventional models based on normalized variations, or on correlations between tempo profiles and loudness profiles, did not explain the similarity ratings well.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>Autriche</li>
</country>
<region>
<li>Vienne (Autriche)</li>
</region>
<settlement>
<li>Vienne (Autriche)</li>
</settlement>
</list>
<tree>
<country name="Autriche">
<region name="Vienne (Autriche)">
<name sortKey="Timmers, Renee" sort="Timmers, Renee" uniqKey="Timmers R" first="Renee" last="Timmers">Renee Timmers</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

# Select record 001900 from the exploration corpus and pretty-print its XML.
EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/MozartV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001900 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 001900 | SxmlIndent | more
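
Both forms assume that the Dilib environment variables are already set in the shell. A minimal sketch, assuming a conventional installation layout (the install prefix below is illustrative):

export WICRI_ROOT=/path/to/wicri    # illustrative prefix; adjust to your installation
export EXPLOR_AREA=$WICRI_ROOT/Wicri/Musique/explor/MozartV1

The EXPLOR_AREA value can be read off the two commands above, since EXPLOR_STEP expands to $EXPLOR_AREA/Data/Main/Exploration.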

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    MozartV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:15704432
   |texte=   Predicting the similarity between expressive performances of music from measurements of tempo and dynamics.
}}

To generate wiki pages

# Look up this record by its RBID in the index, extract the full record
# from biblio.hfd, and convert it to a Wicri wiki page for the MozartV1 area.
HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:15704432" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a MozartV1
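
Assuming NlmPubMed2Wicri writes the generated wikitext to standard output (as its position at the end of the pipeline suggests), the result can be captured in a file; the output file name here is hypothetical:

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i -Sk "pubmed:15704432" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd \
       | NlmPubMed2Wicri -a MozartV1 > 001900.wiki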

Wicri

This area was generated with Dilib version V0.6.20.
Data generation: Sun Apr 10 15:06:14 2016. Site generation: Tue Feb 7 15:40:35 2023